Declarative Memory


Hey Pentti, We Did It Again!: Differentiable vector-symbolic types that prove polynomial termination

Tomkins-Flanagan, Eilene, Hanley, Connor, Kelly, Mary A.

arXiv.org Artificial Intelligence

We present a typed computer language, Doug, in which all typed programs may be proved to halt in polynomial time, encoded in a vector-symbolic architecture (VSA). Doug is an encoding of the light linear functional programming language (LLFPL) described by Schimanski (2009, ch. 7). The types of Doug are encoded using a slot-value encoding scheme based on holographic declarative memory (HDM; Kelly, 2020). The terms of Doug are encoded using a variant of the Lisp VSA defined by Flanagan (2024). Doug allows some points in the embedding space of a neural network to be interpreted as types, where the types of nearby points are similar in both structure and content. Types in Doug are therefore learnable by a neural network. Following Chollet (2019), Card (1983), and Newell (1981), we view skill as the application of a procedure, or program of action, that causes a goal to be satisfied. Skill acquisition may therefore be expressed as program synthesis. Using Doug, we hope to describe a form of learning of skilled behaviour that follows a human-like pace of skill acquisition (i.e., substantially faster than brute force; Heathcote, 2000), exceeding the efficiency of all currently existing approaches (Kaplan, 2020; Jones, 2021; Chollet, 2024). Our approach brings us one step closer to modeling human mental representations, as they must actually exist in the brain, and those representations' acquisition, as they are actually learned.
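
The slot-value encoding HDM builds on is the holographic reduced representation, in which a slot (role) is bound to a value (filler) by circular convolution. Below is a minimal sketch of that binding primitive alone, assuming Plate-style random vectors; the names and dimensionality are illustrative, not Doug's actual type encoding.

```python
# Sketch of HRR-style slot-value binding (the primitive HDM builds on).
# Names and dimensionality are illustrative assumptions, not Doug's API.
import numpy as np

rng = np.random.default_rng(0)
D = 1024  # vector dimensionality

def rand_vec():
    # Random vector with components ~ N(0, 1/D), normalized to unit length.
    v = rng.normal(0.0, 1.0 / np.sqrt(D), D)
    return v / np.linalg.norm(v)

def bind(a, b):
    # Circular convolution via FFT: binds a slot (role) to a value (filler).
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(trace, a):
    # Approximate inverse: convolve with the involution of the slot vector.
    a_inv = np.concatenate(([a[0]], a[:0:-1]))
    return bind(trace, a_inv)

slot, filler = rand_vec(), rand_vec()
trace = bind(slot, filler)
recovered = unbind(trace, slot)
print(np.dot(recovered, filler))  # close to 1: filler recovered from the trace
```

Because binding is lossy but similarity-preserving, nearby slot-value structures map to nearby vectors, which is what makes types on this embedding space learnable.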


Robots Can Multitask Too: Integrating a Memory Architecture and LLMs for Enhanced Cross-Task Robot Action Generation

Ali, Hassan, Allgeuer, Philipp, Mazzola, Carlo, Belgiovine, Giulia, Kaplan, Burak Can, Wermter, Stefan

arXiv.org Artificial Intelligence

Abstract-- Large Language Models (LLMs) have recently been used in robot applications to ground LLM commonsense reasoning in the robot's perception and physical abilities. In humanoid robots, memory also plays a critical role in fostering real-world embodiment and facilitating long-term interactive capabilities, especially in multi-task setups where the robot must remember previous task states, environment states, and executed actions. In this paper, we address incorporating memory processes with LLMs for generating cross-task robot actions while the robot effectively switches between tasks. Our proposed dual-layered architecture features two LLMs, utilizing their complementary skills of reasoning and following instructions, combined with a memory model inspired by human cognition. Our results show a significant improvement in performance over a baseline across five robotic tasks, demonstrating the potential of integrating memory with LLMs for combining the robot's action and perception for adaptive task execution. I. INTRODUCTION Despite the physical limitations due to their embodiment, humanoid robots are particularly effective tools because of their anthropomorphic shape, which can significantly improve environments designed for human interaction [1]. Moreover, the humanoid physical shape supports collaboration with humans, for whom the legibility and predictability of robot actions are important. Nevertheless, LLM reasoning alone is not yet sufficient for implementing the cognitive system of embodied artificial agents capable of solving complex tasks and interacting with humans.
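
The dual-LLM-plus-memory pattern the abstract describes can be sketched as follows; the class and function names, memory layout, and prompt structure below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of a dual-LLM pipeline with a task memory: one LLM
# reasons over memory and perception, the other follows the resulting plan.
# `reason_llm` and `act_llm` stand in for any text-in/text-out LLM callables.
from collections import deque

class TaskMemory:
    """Keeps recent task, environment, and action states for prompting."""
    def __init__(self, capacity=10):
        self.events = deque(maxlen=capacity)

    def remember(self, task, state, action):
        self.events.append({"task": task, "state": state, "action": action})

    def as_context(self):
        return "\n".join(
            f"[{e['task']}] state={e['state']} action={e['action']}"
            for e in self.events
        )

def step(reason_llm, act_llm, memory, task, observation):
    # LLM 1 reasons over memory + current perception to produce a plan.
    plan = reason_llm(
        f"Memory:\n{memory.as_context()}\n"
        f"Task: {task}\nObservation: {observation}\nPlan:"
    )
    # LLM 2 follows the instruction, emitting one executable robot action.
    action = act_llm(f"Execute exactly one robot action for plan: {plan}")
    memory.remember(task, observation, action)
    return action
```

Persisting the task/state/action triples is what lets the robot resume a previous task after switching: the reasoning LLM sees the interrupted task's last recorded state in its context.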


A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian Learning and Free Energy Minimization

Ororbia, Alexander, Kelly, Mary Alexandria

arXiv.org Artificial Intelligence

Over the last few years, large neural generative models, capable of synthesizing semantically rich passages of text or producing complex images, have emerged as a popular representation of what has come to be known as "generative artificial intelligence" (generative AI). Beyond opening the door to new opportunities as well as challenges for the domain of statistical machine learning, the rising popularity of generative AI brings with it interesting questions for Cognitive Science, which seeks to discover the nature of the processes that underpin minds and brains, as well as to understand how such functionality might be acquired and instantiated in biological (or artificial) substrate. With this goal in mind, we argue that a promising research program lies in the crafting of cognitive architectures, a long-standing tradition of the field, cast fundamentally in terms of neuro-mimetic generative building blocks. Concretely, we discuss the COGnitive Neural GENerative system, an architecture that casts the Common Model of Cognition in terms of Hebbian adaptation operating in service of optimizing a variational free energy functional.
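
The coupling of Hebbian adaptation to a free-energy objective can be made concrete with a toy predictive-coding step: a local, Hebbian-like weight update that reduces prediction error, the discrepancy term of a free-energy functional. The sketch below is a generic illustration under simple linear assumptions, not CogNGen's actual update equations.

```python
# Toy predictive-coding step: a Hebbian-like update that locally reduces
# prediction error. Sizes and learning rate are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, (8, 4))   # generative weights: latent -> prediction
z = rng.normal(0, 1.0, 4)        # latent (pre-synaptic) activity
x = rng.normal(0, 1.0, 8)        # observed (target) activity
lr = 0.05

for _ in range(100):
    error = x - W @ z             # prediction error (post-synaptic signal)
    W += lr * np.outer(error, z)  # Hebbian product: error x pre-activity

print(np.mean((x - W @ z) ** 2))  # squared error shrinks toward zero
```

The update uses only quantities available at the synapse (local error and pre-synaptic activity), which is what makes it neuro-mimetic in contrast to backpropagation's global error transport.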


AAAI 2022 Fall Symposium: System-1 and System-2 realized within the Common Model of Cognition

Conway-Smith, Brendan, West, Robert L.

arXiv.org Artificial Intelligence

Attempts to import dual-system descriptions of System-1 and System-2 into AI have been hindered by a lack of clarity over their distinction. We address this and other issues by situating System-1 and System-2 within the Common Model of Cognition. Results show that what are thought to be distinctive characteristics of System-1 and System-2 instead form a spectrum of cognitive properties. The Common Model provides a comprehensive vision of the computational units involved in System-1 and System-2, their underlying mechanisms, and the implications for learning, metacognition, and emotion.


Representational Tenets for Memory Athletics

Schmidt, Kevin, Larue, Othalia, Kulhanek, Ray, Flaute, Dylan, Veliche, Razvan, Manasseh, Christian, Dellis, Nelson, Clouse, Scott, Culbertson, Jared, Rogers, Steve

arXiv.org Artificial Intelligence

We describe the current state of world-class memory competitions, including the methods used to prepare for and compete in memory competitions, based on the subjective report of World Memory Championship Grandmaster and co-author Nelson Dellis. We then explore the reported experiences through the lens of the Simulated, Situated, and Structurally coherent Qualia (S3Q) theory of consciousness, in order to propose a set of experiments to help further understand the boundaries of expert memory performance.


An Analysis and Comparison of ACT-R and Soar

Laird, John E.

arXiv.org Artificial Intelligence

This is a detailed analysis and comparison of the ACT-R and Soar cognitive architectures, including their overall structure, their representations of agent data and metadata, and their associated processing. It focuses on working memory, procedural memory, and long-term declarative memory. I emphasize the commonalities, which are many, but also highlight the differences. I identify the processes and distinct classes of information used by these architectures, including agent data, metadata, and meta-process data, and explore the roles that metadata play in decision making, memory retrievals, and learning.


The human memory--facts and information

National Geographic

From the moment we are born, our brains are bombarded by an immense amount of information about ourselves and the world around us. So, how do we hold on to everything we've learned and experienced? Humans retain different types of memories for different lengths of time. We also have a working memory, which lets us keep something in our minds for a limited time by repeating it. Whenever you say a phone number to yourself over and over to remember it, you're using your working memory.


Understanding Attention: In Minds and Machines

Sawant, Shriraj P., Singh, Shruti

arXiv.org Artificial Intelligence

Attention is a complex and broad concept, studied across multiple disciplines spanning artificial intelligence, cognitive science, psychology, neuroscience, and related fields. Although many of the ideas regarding attention do not significantly overlap among these fields, there is a common theme of adaptive control of limited resources. In this work, we review the concept and variants of attention in artificial neural networks (ANNs). We also discuss the origin of attention from the neuroscience point of view parallel to that of ANNs. Instead of having seemingly disconnected dialogues between varied disciplines, we suggest grounding the ideas on common conceptual frameworks for a systematic analysis of attention and towards possible unification of ideas in AI and Neuroscience.
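
For the ANN side of the review, the canonical reference point is scaled dot-product attention, softmax(QK^T / sqrt(d)) V, which allocates a limited "budget" of weight across inputs. A minimal sketch, with illustrative shapes:

```python
# Minimal scaled dot-product attention: softmax(QK^T / sqrt(d)) V.
# Shapes here are illustrative; real models add heads, masks, projections.
import numpy as np

def attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # pairwise query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V             # weighted sum of values

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(5, 16)) for _ in range(3))
print(attention(Q, K, V).shape)   # (5, 16)
```

The softmax normalization is the formal expression of the "adaptive control of limited resources" theme: weight given to one input is necessarily taken from the others.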


Person, Woman, Man, Camera, TV - Issue 93: Forerunners

Nautilus

Imagine that someone asked you to come up with a sequence of five words. In any other year, some idiosyncratic combination would likely come to mind. This year, though, one five-word sequence that has been etched into the memories of many Americans, and many worldwide, stands out--"person, woman, man, camera, TV." Donald Trump, touting his ability to memorize these words as part of a cognitive health test, made the sequence famous. We can tie together our personal experiences and acquired knowledge--such as this memory of Trump's behavior--into interconnected memories, recallable at a moment's notice.


The information our brain needs to learn a language could almost fit on a floppy disk

Daily Mail - Science & tech

To master English as a native speaker, the average adult has to learn almost as much information as the contents of a full floppy disk, experts estimate. That amount of information translates to 12.5 million bits, or roughly 1.5 megabytes (MB), while the iconic storage device holds 1.44 MB of information. The data is mostly in the form of word definitions rather than complex structures like grammar. This is the first time that researchers have tried to work out the amount of information our brains need to store in order to master a single language. Researchers from the University of Rochester in New York analysed different aspects of language learning and found the average learner acquires nearly 2,000 bits of information about how language works daily.
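
The headline conversion is straightforward arithmetic:

12.5 × 10^6 bits ÷ 8 bits/byte = 1.5625 × 10^6 bytes ≈ 1.5 MB, just above the floppy's 1.44 MB.

And at the reported rate of nearly 2,000 bits per day, accumulating 12.5 million bits takes roughly 12.5 × 10^6 ÷ 2,000 ≈ 6,250 days, or about 17 years, which is on the order of the childhood and adolescence over which a native speaker acquires the language.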